
    Temporal Convolution in Spiking Neural Networks: a Bio-mimetic Paradigm

    Abstract Recent spectacular advances in Artificial Intelligence (AI) can, in large part, be attributed to developments in Deep Learning (DL). In essence, DL is not a new concept. In many respects, DL shares characteristics with “traditional” types of Neural Network (NN). The main distinguishing feature is that it uses many more layers in order to learn increasingly complex features. Each layer convolves the output of the previous one, simplifying it and applying a function to a subsection of that layer. Deep Learning’s remarkable success can be attributed to dedicated researchers experimenting with many groundbreaking techniques, but some of its triumph can also be attributed to fortune: it was the right technique at the right time. To function effectively, DL mainly requires two things: (a) vast amounts of training data and (b) a very specific type of computational capacity. These two requirements have been amply met by the growth of the internet and the rapid development of GPUs. As such, DL is an almost perfect fit for today’s technologies. However, DL is only a very rough approximation of how the brain works. More recently, Spiking Neural Networks (SNNs) have tried to simulate biological phenomena in a more realistic way. In SNNs, information is transmitted as discrete spikes of data rather than as a continuous weight or a differentiable activation function. In practical terms this means that far more nuanced interactions can occur between neurons and that the network can run far more efficiently (e.g. in terms of the calculations needed and therefore the overall power requirements). Nevertheless, the big problem with SNNs is that, unlike DL, they do not “fit” well with existing technologies. Worse still, no one has yet come up with a definitive way to make SNNs function at a “deep” level. The difficulty is that, in essence, "deep" and "spiking" refer to fundamentally different characteristics of a neural network: "spiking" focuses on the activation of individual neurons, whereas "deep" concerns itself with the network architecture itself [1]. However, these two methods are in fact not contradictory; they have so far been developed in isolation from each other because of the prevailing technology driving each technique and the fundamental conceptual distance between the two biological paradigms. If advances in AI are to continue at the present rate, new technologies will have to be developed and the contradictory aspects of DL and SNNs will have to be reconciled. Very recently, there have been a handful of attempts to amalgamate DL and SNNs in a variety of ways [2], one of the most exciting being the creation of a specific hierarchical learning paradigm for Recurrent SNNs (RSNNs) called e-prop [3]. However, this paper posits that such attempts have been made problematic because a fundamental agent in the way the biological brain functions has been missing from each paradigm, and that if it is included in a new model, the union between DL and RSNNs can be made in a more harmonious manner. The missing piece of the jigsaw is, in fact, the glial cell and the unacknowledged function it plays in neural processing. In this context, this paper examines how DL and SNNs can be combined, and how glial dynamics can not only address outstanding issues with the existing individual paradigms - for example the “weight transport” problem - but also act as the “glue” (pun intended) between these two paradigms.
    This idea has a direct parallel with convolution in DL but has the added dimension of time: in this new paradigm it matters not only where events happen but also when they occur. The synergy between these two powerful paradigms gives hints at the direction and potential of what could be an important part of the next wave of development in AI.
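    As a rough illustration of the temporal dimension described above (an illustrative sketch, not code from the paper; all names and parameters are assumptions), a spike train can be convolved with a causal exponential kernel so that each output value depends both on which inputs fired and on how long ago they fired:

        # Minimal sketch: 1-D temporal convolution of a spike train with a
        # causal exponential kernel. "Where" is which input fired; "when" is
        # captured by the decaying kernel.
        import numpy as np

        rng = np.random.default_rng(0)

        dt = 1.0                                           # time step (ms)
        t = np.arange(0, 50, dt)                           # 50 ms of time
        spikes = (rng.random(t.size) < 0.1).astype(float)  # Bernoulli spike train

        tau = 5.0                                          # kernel time constant (ms)
        kernel = np.exp(-t[:20] / tau)                     # causal exponential kernel

        # Each output sample is a weighted sum of recent spikes, weighted by
        # how long ago they occurred (truncated to the original length).
        trace = np.convolve(spikes, kernel)[: t.size]

        print(spikes[:10])
        print(np.round(trace[:10], 3))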

    Robustness of Learning That Is Based on Covariance-Driven Synaptic Plasticity

    It is widely believed that learning is due, at least in part, to long-lasting modifications of the strengths of synapses in the brain. Theoretical studies have shown that a family of synaptic plasticity rules, in which synaptic changes are driven by covariance, is particularly useful for many forms of learning, including associative memory, gradient estimation, and operant conditioning. Covariance-based plasticity is inherently sensitive: even a slight mistuning of the parameters of a covariance-based plasticity rule is likely to result in substantial changes in synaptic efficacies. Therefore, the biological relevance of covariance-based plasticity models is questionable. Here, we study the effects of mistuning parameters of the plasticity rule in a decision-making model in which synaptic plasticity is driven by the covariance of reward and neural activity. An exact covariance plasticity rule yields Herrnstein's matching law. We show that although the effect of slight mistuning of the plasticity rule on the synaptic efficacies is large, the behavioral effect is small. Thus, matching behavior is robust to mistuning of the parameters of the covariance-based plasticity rule. Furthermore, the mistuned covariance rule results in undermatching, which is consistent with experimentally observed behavior. These results substantiate the hypothesis that approximate covariance-based synaptic plasticity underlies operant conditioning. However, we show that the mistuning of the mean subtraction makes behavior sensitive to the properties of the decision-making network. Thus, there is a tradeoff between the robustness of matching behavior to changes in the plasticity rule and its robustness to changes in the properties of the decision-making network.
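    As a hedged sketch of the kind of rule discussed above (illustrative only; the variable names, learning rate, and reward model are assumptions, not the paper's), the weight change is driven by the covariance of reward and presynaptic activity, with running means playing the role of the "mean subtraction" whose mistuning is analysed:

        # Illustrative covariance-driven update: dw ~ eta*(R - <R>)*(x - <x>),
        # where <R> and <x> are running means. Setting 'bias' nonzero mimics a
        # mistuned mean subtraction, the perturbation the paper studies.
        import numpy as np

        rng = np.random.default_rng(1)

        eta = 0.01          # learning rate
        bias = 0.0          # nonzero -> mistuned mean subtraction
        w = 0.5             # synaptic efficacy
        r_mean, x_mean = 0.0, 0.0

        for trial in range(1000):
            x = rng.random()                               # presynaptic activity
            reward = 1.0 if rng.random() < 0.3 + 0.4 * w * x else 0.0
            r_mean += 0.05 * (reward - r_mean)             # running mean of reward
            x_mean += 0.05 * (x - x_mean)                  # running mean of activity
            # covariance-driven weight change with an optional mistuning term
            w += eta * (reward - (r_mean + bias)) * (x - x_mean)

        print(round(w, 3))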

    CuBIC: cumulant based inference of higher-order correlations in massively parallel spike trains

    Recent developments in electrophysiological and optical recording techniques enable the simultaneous observation of large numbers of neurons. A meaningful interpretation of the resulting multivariate data, however, presents a serious challenge. In particular, the estimation of higher-order correlations that characterize the cooperative dynamics of groups of neurons is impeded by the combinatorial explosion of the parameter space. The resulting requirements with respect to sample size and recording time have rendered the detection of coordinated neuronal groups exceedingly difficult. Here we describe a novel approach to infer higher-order correlations in massively parallel spike trains that is less susceptible to these problems. Based on the superimposed activity of all recorded neurons, the cumulant-based inference of higher-order correlations (CuBIC) presented here exploits the fact that the absence of higher-order correlations also imposes strong constraints on correlations of lower order. Thus, estimates of only a few lower-order cumulants suffice to infer higher-order correlations in the population. As a consequence, CuBIC is much more compatible with the constraints of in vivo recordings than previous approaches, which is shown by a systematic analysis of its parameter dependence.
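    A hedged sketch of the ingredient that makes this tractable (the actual CuBIC hypothesis test is not reproduced; the surrogate data and names below are assumptions): bin all spike trains, sum them into one population count per bin, and estimate the first three sample cumulants of that count, which are the "few lower-order cumulants" the method relies on.

        # Sketch only: lower-order cumulants of the summed population activity,
        # the raw ingredients of a CuBIC-style analysis.
        import numpy as np

        rng = np.random.default_rng(2)

        n_neurons, n_bins = 100, 10_000
        # surrogate data: independent Bernoulli spike trains in discrete bins
        binned = (rng.random((n_neurons, n_bins)) < 0.05).astype(int)

        pop_count = binned.sum(axis=0)           # summed activity per time bin

        k1 = pop_count.mean()                    # first cumulant: mean
        k2 = pop_count.var()                     # second cumulant: variance
        k3 = np.mean((pop_count - k1) ** 3)      # third cumulant: third central moment

        print(round(k1, 2), round(k2, 2), round(k3, 2))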

    Computational modeling with spiking neural networks

    This chapter reviews recent developments in the area of spiking neural networks (SNN) and summarizes the main contributions to this research field. We give background information about the functioning of biological neurons, discuss the most important mathematical neural models along with neural encoding techniques, learning algorithms, and applications of spiking neurons. As a specific application, the functioning of the evolving spiking neural network (eSNN) classification method is presented in detail and the principles of numerous eSNN-based applications are highlighted and discussed.
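    To make the kind of mathematical neural model surveyed in such a review concrete, the following minimal leaky integrate-and-fire simulation (an illustrative sketch, not code from the chapter; all constants are assumptions) integrates an input current, fires when a threshold is crossed, and resets:

        # Minimal leaky integrate-and-fire (LIF) neuron, one of the standard
        # spiking models such reviews cover. Illustrative sketch only.
        import numpy as np

        dt, tau = 0.1, 10.0                         # time step and membrane tau (ms)
        v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # potentials (mV)
        t = np.arange(0, 100, dt)                   # 100 ms of simulation
        i_ext = 20.0                                # constant input (arbitrary units)

        v = v_rest
        spike_times = []
        for ti in t:
            # leaky integration of the membrane potential
            v += dt / tau * (-(v - v_rest) + i_ext)
            if v >= v_thresh:                       # threshold crossing: spike and reset
                spike_times.append(ti)
                v = v_reset

        print(len(spike_times), "spikes in 100 ms")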

    State-Space Analysis of Time-Varying Higher-Order Spike Correlation for Multiple Neural Spike Train Data

    Precise spike coordination between the spiking activities of multiple neurons is suggested as an indication of coordinated network activity in active cell assemblies. Spike correlation analysis aims to identify such cooperative network activity by detecting excess spike synchrony in simultaneously recorded multiple neural spike sequences. Cooperative activity is expected to organize dynamically during behavior and cognition; therefore currently available analysis techniques must be extended to enable the estimation of multiple time-varying spike interactions between neurons simultaneously. In particular, new methods must take advantage of the simultaneous observations of multiple neurons by addressing their higher-order dependencies, which cannot be revealed by pairwise analyses alone. In this paper, we develop a method for estimating time-varying spike interactions by means of a state-space analysis. Discretized parallel spike sequences are modeled as multivariate binary processes using a log-linear model that provides a well-defined measure of higher-order spike correlation in an information geometry framework. We construct a recursive Bayesian filter/smoother for the extraction of spike interaction parameters. This method can simultaneously estimate the dynamic pairwise spike interactions of multiple single neurons, thereby extending the Ising/spin-glass model analysis of multiple neural spike train data to a nonstationary analysis. Furthermore, the method can estimate dynamic higher-order spike interactions. To validate the inclusion of the higher-order terms in the model, we construct an approximation method to assess the goodness-of-fit to spike data. In addition, we formulate a test method for the presence of higher-order spike correlation even in nonstationary spike data, e.g., data from awake behaving animals. The utility of the proposed methods is tested using simulated spike data with known underlying correlation dynamics. Finally, we apply the methods to neural spike data simultaneously recorded from the motor cortex of an awake monkey and demonstrate that the higher-order spike correlation organizes dynamically in relation to a behavioral demand.
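    For intuition about the log-linear measure used here, the sketch below is a stationary, two-neuron simplification (the paper's state-space filter/smoother is not reproduced, and the surrogate data are assumptions): it computes the information-geometric pairwise interaction theta_12 from empirical pattern frequencies of two binned spike trains.

        # Stationary two-neuron sketch of the log-linear interaction measure:
        # log p(x1, x2) = th1*x1 + th2*x2 + th12*x1*x2 - psi, hence
        # th12 = log( p11 * p00 / (p10 * p01) ). The paper's contribution is
        # tracking such parameters over time, which is not reproduced here.
        import numpy as np

        rng = np.random.default_rng(3)

        n_bins = 50_000
        common = rng.random(n_bins) < 0.02            # shared input injects correlation
        x1 = ((rng.random(n_bins) < 0.05) | common).astype(int)
        x2 = ((rng.random(n_bins) < 0.05) | common).astype(int)

        # empirical probabilities of the four joint binary patterns
        p11 = np.mean((x1 == 1) & (x2 == 1)) + 1e-9
        p10 = np.mean((x1 == 1) & (x2 == 0)) + 1e-9
        p01 = np.mean((x1 == 0) & (x2 == 1)) + 1e-9
        p00 = np.mean((x1 == 0) & (x2 == 0)) + 1e-9

        theta12 = np.log(p11 * p00 / (p10 * p01))     # > 0 indicates excess synchrony
        print(round(theta12, 3))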

    Evolutionary Indirect Design of Feed-Forward Spiking Neural Networks
